Blind image quality assessment (BIQA) has achieved great success in various task-specific scenarios in recent years, where the distortion types and evaluation criteria stay unchanged. However, due to their rigid structures and learning frameworks, such schemes cannot be applied to cross-task BIQA scenarios, in which the distortion types and evaluation criteria keep changing in practical applications. This paper proposes a scalable incremental learning framework (SILF) that can sequentially conduct BIQA across multiple evaluation tasks with limited memory capacity. More specifically, we develop a dynamic parameter isolation strategy to sequentially update task-specific parameter subsets that do not overlap with one another. Each parameter subset is temporarily settled to memorize one evaluation preference for its corresponding task, and the previously settled parameter subsets can be adaptively reused in subsequent BIQA tasks to achieve better performance based on task relevance. To suppress the unrestrained expansion of memory capacity during sequential task learning, we develop a scalable memory unit that gradually and selectively prunes unimportant neurons from the previously settled parameter subsets, which enables us to forget part of the earlier experience and free limited memory capacity for adapting to emerging new tasks. Extensive experiments on eleven IQA datasets demonstrate that our proposed method significantly outperforms other state-of-the-art methods in cross-task BIQA.
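The interplay of dynamic parameter isolation and selective pruning can be illustrated with a minimal sketch. The helper names (`allocate_subset`, `prune`) and the magnitude-based importance criterion are our own illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

n_params = 12                      # total memory capacity (toy scale)
weights = rng.normal(size=n_params)
owner = np.full(n_params, -1)      # -1 = free slot; otherwise the owning task id

def allocate_subset(task_id, size):
    """Assign `size` free parameter slots to a new task (subsets never overlap)."""
    free = np.flatnonzero(owner == -1)
    assert len(free) >= size, "memory exhausted; prune an earlier task first"
    owner[free[:size]] = task_id
    return free[:size]

def prune(task_id, keep_ratio):
    """Selectively forget: release the lowest-magnitude weights of an old task."""
    idx = np.flatnonzero(owner == task_id)
    n_keep = int(np.ceil(keep_ratio * len(idx)))
    order = np.argsort(np.abs(weights[idx]))    # least important first
    owner[idx[order[:len(idx) - n_keep]]] = -1  # freed slots become reusable

s0 = allocate_subset(0, 8)          # task 0 settles 8 slots
prune(0, keep_ratio=0.5)            # keep only its 4 most important weights
s1 = allocate_subset(1, 6)          # task 1 reuses the freed capacity
assert np.sum(owner == 0) == 4 and np.sum(owner == 1) == 6
```

The key property the sketch preserves is that the total parameter budget stays fixed: new tasks can only grow by reclaiming capacity that pruning released.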
Colorectal cancer (CRC) is one of the most common fatal cancers in the world. Polypectomy can effectively interrupt the progression of adenoma to adenocarcinoma, thereby reducing the risk of CRC development. Colonoscopy is the primary method for locating colonic polyps. However, accurately segmenting polyps is challenging because of their varying sizes and the unclear boundary between a polyp and the surrounding mucosa. To address this problem, we design a Boundary Distribution Guided Network (BDG-Net) for accurate polyp segmentation. Specifically, under the supervision of an ideal Boundary Distribution Map (BDM), we use a Boundary Distribution Generate Module (BDGM) to aggregate high-level features and generate the BDM. The BDM is then sent to the Boundary Distribution Guided Decoder (BDGD) as complementary spatial information to guide polyp segmentation. Moreover, the BDGD adopts a multi-scale feature interaction strategy to improve segmentation accuracy for polyps of different sizes. Extensive quantitative and qualitative evaluations demonstrate the effectiveness of our model, which significantly outperforms state-of-the-art models on five public polyp datasets while maintaining low computational complexity.
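The abstract does not give the formula for the ideal BDM, but maps of this kind are commonly built by applying a Gaussian to each pixel's distance from the object boundary, so that supervision concentrates around the ambiguous polyp edge. A toy sketch under that assumption:

```python
import numpy as np

# Toy binary segmentation mask (1 = polyp, 0 = background).
mask = np.zeros((7, 7), dtype=int)
mask[2:5, 2:5] = 1

# Boundary pixels: foreground pixels with at least one background 4-neighbour.
pad = np.pad(mask, 1)
neigh_min = np.minimum.reduce([pad[:-2, 1:-1], pad[2:, 1:-1],
                               pad[1:-1, :-2], pad[1:-1, 2:]])
boundary = (mask == 1) & (neigh_min == 0)

# BDM: Gaussian of each pixel's Euclidean distance to the nearest boundary pixel.
ys, xs = np.nonzero(boundary)
yy, xx = np.mgrid[:7, :7]
dist = np.sqrt((yy[..., None] - ys) ** 2 + (xx[..., None] - xs) ** 2).min(-1)
sigma = 1.5
bdm = np.exp(-dist ** 2 / (2 * sigma ** 2))

assert bdm.max() == 1.0  # the map peaks exactly on the boundary
```

Supervising an auxiliary head against such a map rewards the network for being confident precisely where the polyp/mucosa transition is hardest to see.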
The success of deep learning heavily relies on large-scale data with comprehensive labels, which is more expensive and time-consuming to fetch in 3D compared to 2D images or natural languages. This promotes the potential of utilizing models pretrained with data other than 3D as teachers for cross-modal knowledge transferring. In this paper, we revisit masked modeling in a unified fashion of knowledge distillation, and we show that foundational Transformers pretrained with 2D images or natural languages can help self-supervised 3D representation learning through training Autoencoders as Cross-Modal Teachers (ACT). The pretrained Transformers are transferred as cross-modal 3D teachers using discrete variational autoencoding self-supervision, during which the Transformers are frozen with prompt tuning for better knowledge inheritance. The latent features encoded by the 3D teachers are used as the target of masked point modeling, wherein the dark knowledge is distilled to the 3D Transformer students as foundational geometry understanding. Our ACT pretrained 3D learner achieves state-of-the-art generalization capacity across various downstream benchmarks, e.g., 88.21% overall accuracy on ScanObjectNN. Codes will be released at https://github.com/RunpeiDong/ACT.
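The masked-point-modeling objective, in which a frozen teacher's latent features serve as regression targets for the student on masked positions, can be sketched at toy scale. The tiny linear "networks", dimensions, and finite-difference training step below are placeholders standing in for the real Transformers and backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 16, 8                        # point tokens and feature width (toy)

teacher_w = rng.normal(size=(d, d))        # frozen pretrained teacher (placeholder)
student_w = rng.normal(size=(d, d)) * 0.1  # 3D student to be trained

tokens = rng.normal(size=(n_tokens, d))    # embedded point patches
masked = np.zeros(n_tokens, dtype=bool)
masked[: int(0.6 * n_tokens)] = True       # mask a fixed 60% of the tokens

target = tokens @ teacher_w                # teacher latents: distillation targets

def loss(w):
    pred = tokens @ w                      # student predictions
    return np.mean((pred[masked] - target[masked]) ** 2)

# One step of gradient descent (finite differences, for self-containedness).
eps, lr = 1e-4, 1e-2
grad = np.zeros_like(student_w)
for i in range(d):
    for j in range(d):
        e = np.zeros_like(student_w); e[i, j] = eps
        grad[i, j] = (loss(student_w + e) - loss(student_w - e)) / (2 * eps)
before = loss(student_w)
student_w -= lr * grad
assert loss(student_w) < before            # the student moves toward the teacher
```

Only the masked positions contribute to the loss, which is what makes the target "dark knowledge" from the teacher rather than a plain reconstruction of the input.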
Simile recognition involves two subtasks: simile sentence classification, which discriminates whether a sentence contains a simile, and simile component extraction, which locates the corresponding objects (i.e., tenors and vehicles). Recent work ignores features other than surface strings. In this paper, we explore expressive features for this task to achieve more effective data utilization. Particularly, we study two types of features: 1) input-side features that include POS tags, dependency trees and word definitions, and 2) decoding features that capture the interdependence among various decoding decisions. We further construct a model named HGSR, which merges the input-side features as a heterogeneous graph and leverages decoding features via distillation. Experiments show that HGSR significantly outperforms the current state-of-the-art systems and carefully designed baselines, verifying the effectiveness of the introduced features. Our code is available at https://github.com/DeepLearnXMU/HGSR.
Current natural language processing (NLP) models such as BERT and RoBERTa have achieved high overall performance, but they often make systematic errors due to bias or certain features that are difficult to learn. Research on slice detection models (SDMs), which automatically identify underperforming groups of datapoints, has therefore gradually attracted more attention; it aims both at understanding model behaviors and at providing insights for future model training and design. However, there is little systematic research on SDMs and on quantitative evaluation of their assessments of NLP models. Our paper fills this gap by proposing the "Discover, Explanation, Improvement" framework, which discovers coherent and underperforming groups of datapoints and unites the datapoints of each slice under human-understandable concepts; it also provides comprehensive evaluation tasks and corresponding quantitative metrics, which enable convenient comparison in future work. Results show that our framework can accurately select error-prone datapoints with informative semantic features that summarize error patterns, based on which it directly boosts model performance by an average of 2.85 points across multiple datasets without tuning any parameters of the trained models.
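At its core, slice detection groups evaluation examples by shared features and flags groups whose accuracy falls well below the overall average. A minimal illustration, where the feature names, the toy results, and the margin threshold are all invented for the example:

```python
from collections import defaultdict

# (feature describing the datapoint, whether the model got it right)
results = [
    ("contains_negation", True), ("contains_negation", False),
    ("contains_negation", False), ("short_sentence", True),
    ("short_sentence", True), ("short_sentence", True),
    ("rare_words", False), ("rare_words", True),
]

overall = sum(ok for _, ok in results) / len(results)  # 0.625 here

by_slice = defaultdict(list)
for feat, ok in results:
    by_slice[feat].append(ok)

# Flag slices that underperform the overall accuracy by a margin.
margin = 0.1
underperforming = {
    feat: sum(oks) / len(oks)
    for feat, oks in by_slice.items()
    if sum(oks) / len(oks) < overall - margin
}
print(sorted(underperforming))  # the negation and rare-word slices stand out
```

A real SDM replaces the hand-written features with learned, human-interpretable concepts, but the discover step reduces to exactly this kind of per-slice accuracy comparison.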
In medical image analysis, the subtle visual characteristics of many diseases are challenging to identify, in particular due to the lack of paired data. For example, in mild Alzheimer's disease (AD), brain tissue atrophy is hard to observe from pure imaging data, especially without paired AD and cognitively normal (CN) data for comparison. This work presents Disease Discovery GAN (DiDiGAN), a weakly-supervised style-based framework for discovering and visualizing subtle disease features. DiDiGAN learns a disease manifold of AD and CN visual characteristics, and imposes style codes sampled from this manifold onto an anatomical structure "blueprint" to synthesize paired AD and CN magnetic resonance images (MRIs). To suppress non-disease-related variations between the generated AD and CN pairs, DiDiGAN leverages a structural constraint with cycle consistency and anti-aliasing to enforce anatomical correspondence. When tested on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, DiDiGAN revealed key AD characteristics (reduced hippocampal volume, ventricular enlargement, and atrophy of cortical structures) through its synthesized paired AD and CN scans. The qualitative results are supported by automated brain volume analysis, in which systematic pairwise reductions in brain tissue structures were also measured.
Pre-trained language models have made great progress on dialogue tasks. However, these models are typically trained on surface dialogue text and are thus shown to be weak at understanding the main semantic meaning of a dialogue context. We investigate Abstract Meaning Representation (AMR) as explicit semantic knowledge for pre-training models, in order to capture the core semantic information in dialogues during pre-training. In particular, we propose a semantic-based pre-training framework that extends the standard pre-training framework (Devlin et al., 2019) with three tasks based on AMR graph representations. Experiments on the understanding of both chit-chat and task-oriented dialogues show the superiority of our model. To our knowledge, we are the first to leverage deep semantic representations for dialogue pre-training.
We present a variational free energy approach to the equation of state of dense hydrogen based on deep generative models. We employ a normalizing flow network to model the proton Boltzmann distribution and a fermionic neural network to model the electron wavefunction at given proton positions. By jointly optimizing the two neural networks, we reach variational free energies comparable to previous electronic Monte Carlo calculations. Our results suggest that hydrogen under planetary conditions is even denser than indicated by previous Monte Carlo and ab initio molecular dynamics data, which is further away from the predictions of empirical chemical models. Obtaining a reliable equation of state of dense hydrogen, and in particular direct access to entropy and free energy, opens new opportunities in planetary modeling and high-pressure physics research.
Machine-learning-assisted modeling of the interatomic potential energy surface (PES) is revolutionizing the field of molecular simulation. With the accumulation of high-quality electronic structure data, a model that can be pretrained on all available data and finetuned on downstream tasks with a small additional effort would bring the field to a new stage. Here we propose DPA-1, a Deep Potential model with a novel attention mechanism, which is highly effective at representing the conformational and chemical spaces of atomic systems and at learning the PES. We tested DPA-1 on a number of systems and observed superior performance compared with existing benchmarks. When pretrained on a large-scale dataset containing 56 elements, DPA-1 can be successfully applied to various downstream tasks with a great improvement in sample efficiency. Surprisingly, for different elements, the learned type embedding parameters form a $spiral$ in the latent space and have a natural correspondence with their positions in the periodic table, showing interesting interpretability of the pretrained DPA-1 model.
The success of deep learning is usually accompanied by the growth of neural network depth. However, traditional training methods only supervise the neural network at its last layer and propagate the supervision layer-by-layer, which leads to difficulty in optimizing the intermediate layers. Recently, deep supervision has been proposed to add auxiliary classifiers to the intermediate layers of deep neural networks. By optimizing these auxiliary classifiers with the supervised task loss, supervision can be applied to the shallow layers directly. However, deep supervision conflicts with the well-known observation that shallow layers learn low-level features instead of task-biased high-level semantic features. To address this issue, this paper proposes a novel training framework named Contrastive Deep Supervision, which supervises the intermediate layers with augmentation-based contrastive learning. Experimental results on nine popular datasets with eleven models demonstrate its effects on general image classification, fine-grained image classification, and object detection in supervised learning, semi-supervised learning, and knowledge distillation. The code has been released on GitHub.
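Supervising an intermediate layer with augmentation-based contrastive learning reduces to an InfoNCE-style loss between the layer's features for two augmented views of the same batch. A numpy sketch, where the random "features" stand in for real intermediate activations and the noise level mimics augmentation:

```python
import numpy as np

rng = np.random.default_rng(1)
batch, d, tau = 4, 16, 0.5

# Intermediate-layer features of two augmentations of the same batch.
base = rng.normal(size=(batch, d))
z1 = base + 0.05 * rng.normal(size=(batch, d))
z2 = base + 0.05 * rng.normal(size=(batch, d))

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

z1, z2 = normalize(z1), normalize(z2)
sim = z1 @ z2.T / tau                      # cosine similarities / temperature

# InfoNCE: each view-1 feature should match its own view-2 counterpart
# (diagonal entries) against all other samples in the batch (off-diagonal).
log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_prob))

assert loss < np.log(batch)  # log(batch) is the loss of a uniform random guess
```

Because this auxiliary loss asks shallow layers only for augmentation-invariant features rather than class predictions, it avoids the conflict with the observation that shallow layers learn low-level features.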